verl: Flexible and Efficient RL for LLMs

Yuxuan Tong (童雨轩)

ByteDance Seed & Tsinghua University

2025/05/24

1 Background: Why is Large-Scale RL Important?

1.1 Learning to Reason with Large-Scale RL

Learning to Reason with Large-Scale RL significantly boosts the performance of LLMs.
Model                   Large-Scale RL?   AIME 2024   MATH 500   GPQA Diamond   Codeforces
GPT-4o (OpenAI 2024)    ✗                 44.6        60.3       50.6           >11.0%
o1 (OpenAI 2024)        ✓                 74.4        94.8       77.3           >89.0%
R1 (DeepSeek-AI 2025)   ✓                 79.8        97.3       71.5           >96.3%

1.2 Learning as Agent with Large-Scale RL

OpenAI (2025):

"Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model."

Check OpenAI Deep Research’s demo video for more details.

1.3 Future: Agent + Tool Protocol = Versatile

MCP (Model Context Protocol) is an example of such a tool protocol.

2 Background: Why is Large-Scale RL Challenging?

2.1 Complex: RL as Dataflow Graph

We can model Reinforcement Learning (RL) as a complex dataflow graph, consisting of:

  1. multiple models: actor, critic, reference, reward model, etc.
  2. multiple stages: generation, experience preparation, training
  3. multiple workloads: generation, inference, training

2.2 More Complex: RL with LLMs

RL with Large Language Models (LLMs) is even more challenging.

2.3 Even More Complex: Flexible & Efficient Implementation

3 Why verl?

3.1 Flexibility: RL Algorithms in Lines

Listing 1: PPO core code.
for prompts in dataloader:
    # Stage 1: Sampling Trajectories
    batch = actor.generate_sequences(prompts)
    # Stage 2: Preparing Experiences
    batch = reward.compute_reward(batch)
    batch = reference.compute_log_prob(batch)
    batch = critic.compute_values(batch)
    batch = compute_advantage(batch, "gae")
    # Stage 3: Training
    critic.update_critic(batch)
    actor.update_actor(batch)

3.2 Efficiency: Minimized Overhead & Maximized GPU Utilization

3.2.1 Within Workload: Efficient Implementation Based on “Multi-Controller”

Parallelism Algorithms:

  • Data Parallelism
  • Tensor Parallelism
  • Pipeline Parallelism
  • Context / Sequence Parallelism

Training Backend:

  • FSDP & FSDP2
  • Megatron

Generation Backend:

  • vLLM
  • SGLang

3.2.2 Between Workloads: High GPU Utilization Based on Hybrid Engine

  • offloading & reloading model parameters, fully utilizing GPU memory
  • switching to the optimal parallelism strategy for each workload
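
As an illustration, a minimal sketch of the offload/reload idea (not verl's actual API; the function names here are hypothetical):

import torch

def offload_model(model: torch.nn.Module) -> None:
    # Move parameters to CPU to free GPU memory for the generation engine.
    for p in model.parameters():
        p.data = p.data.to("cpu", non_blocking=True)
    torch.cuda.empty_cache()

def reload_model(model: torch.nn.Module, device: str = "cuda") -> None:
    # Bring parameters back to GPU before the next training stage.
    for p in model.parameters():
        p.data = p.data.to(device, non_blocking=True)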

3.3 Open-Source Community

3.3.1 Extensive Impact

As of 2025/05/24, verl has:

  • 8.4k stars
  • 1k forks
  • 894 PRs
  • 176 contributors

3.3.2 Easy to Extend

Many projects are built on top of verl.

4 Paradigm: HybridFlow (Sheng et al. 2024)

4.1 Background: Single-Controller vs. Multi-Controller

Figure 6: Single-Controller (Multi-Program-Multi-Data, MPMD) vs. Multi-Controller (Single-Program-Multi-Data, SPMD) (Barham et al. 2022)
  • Single-Controller (MPMD): A centralized controller manages all the workers, running different programs.
  • Multi-Controller (SPMD): Each worker has its own controller, running the same program with different data.

4.2 Trade-off: Single-Controller or Multi-Controller?

Paradigm            Pro         Con
Single-Controller   Flexible    Communication Overhead
Multi-Controller    Efficient   Complex Programming

Which paradigm should we choose?

We can have both!

4.3 New Paradigm: Hybrid-Controller!

Hybrid-Controller = Single-Controller + N × Multi-Controller

4.4 Implementation in verl

Listing 2: PPO algorithm in single-controller.
for prompts in dataloader:
    # Stage 1: Sampling Trajectories
    batch = actor.generate_sequences(prompts)
    # Stage 2: Preparing Experiences
    batch = reward.compute_reward(batch)
    batch = reference.compute_log_prob(batch)
    batch = critic.compute_values(batch)
    batch = compute_advantage(batch, "gae")
    # Stage 3: Training
    critic.update_critic(batch)
    actor.update_actor(batch)
Listing 3: Example distributed computation in multi-controller.
# Pseudocode following the HybridFlow paper: `3DParallelWorker` is a base
# worker class implementing 3-D parallelism, and `3D_PROTO` its dispatch protocol.
class CriticWorker(3DParallelWorker):
    @register(dispatch_mode=3D_PROTO)
    def compute_values(self, batch: DataProto):
        values = self.critic.forward(batch)
        batch.update(values=values)
# ...
class ActorWorker(3DParallelWorker):
    @register(dispatch_mode=3D_PROTO)
    def update_actor(self, batch: DataProto):
        loss = self.actor(batch)
        loss.backward()

5 Latest Updates & Roadmap

5.1 Async Engine for Multi-Turn Generation (Upstreamed)

Implementation   Scheduling Unit   Synchronization Point
Synchronous      Batch             Each Turn
Asynchronous     Request           Whole Batch

5.2 Efficient RL with Huge MoE like DeepSeek-V3-671B (ETA: Late May’25)

verl is working on supporting efficient RL training for huge MoE models like DeepSeek-V3-671B, based on the following features:

  1. MoE models based on Megatron's GPTModel class for actor and critic
  2. Multi-node inference
  3. Parameter sharding manager for Megatron-Core v0.12 + the latest versions of inference engines

For more details, please check our tracker #708.

5.3 Agentic RL with Diverse Environments & Tools (Planned)

  1. Our ongoing RFC
  2. Integrating protocols like MCP
  3. Integrating existing environments & tools, e.g., Atropos (Mahan, Teknium, and Roger Jin 2025) and KORGym (Shi et al. 2025)

5.4 Other Plans

  1. Partial Rollout (Kimi Team 2025)
  2. Multi-Token-Prediction (MTP) (Gloeckle et al. 2024)

For the most timely updates of important features, please keep an eye on verl’s Roadmap.

Thanks for Listening!

You are welcome to join the verl community to discuss and contribute!

Repo: https://github.com/volcengine/verl

Appendix

6 Introduction to Features in verl

6.1 Sequence Packing

  1. Removes padding tokens and packs multiple data sequences into a single row
  2. Tweaks the attention mask & position IDs to avoid cross-contamination between sequences

To enable this, use use_remove_padding.
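
As a minimal illustration of the idea (independent of verl's actual implementation), the per-sequence metadata consumed by variable-length attention kernels can be built like this:

import torch

# Three un-padded sequences of different lengths.
seqs = [torch.tensor([5, 8, 2]), torch.tensor([7, 1]), torch.tensor([9, 4, 3, 6])]

packed = torch.cat(seqs)  # one row, no padding tokens
seqlens = torch.tensor([len(s) for s in seqs])
# Cumulative boundaries tell the attention kernel where each sequence starts
# and ends, preventing cross-contamination between packed sequences.
cu_seqlens = torch.cumsum(torch.cat([torch.zeros(1, dtype=torch.long), seqlens]), dim=0)
# Position IDs restart at 0 for each sequence.
position_ids = torch.cat([torch.arange(len(s)) for s in seqs])

print(packed)      # tensor([5, 8, 2, 7, 1, 9, 4, 3, 6])
print(cu_seqlens)  # tensor([0, 3, 5, 9])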

6.2 DP Balancing

6.2.1 Load Imbalance in DP

  • Parallelism usually needs synchronization between different ranks.
  • Data Parallelism (DP) like ZeRO is the most commonly used parallelism strategy.
  • However, DP performance can be degraded by load imbalance, which is especially severe in long-context training.

6.2.2 Balancing across DP Ranks

  • balance the valid tokens dispatched to each rank
  • by reordering the samples in each batch

To enable this, use balance_batch.
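
The idea can be sketched as a greedy longest-first assignment (a simplified illustration; verl's actual reordering heuristic may differ):

def balance_across_ranks(seq_lens: list[int], world_size: int) -> list[list[int]]:
    # Assign samples to DP ranks, longest first, so that every rank
    # ends up with a near-equal number of valid tokens.
    order = sorted(range(len(seq_lens)), key=lambda i: -seq_lens[i])
    buckets = [[] for _ in range(world_size)]
    loads = [0] * world_size
    for i in order:
        r = loads.index(min(loads))  # rank with the fewest tokens so far
        buckets[r].append(i)
        loads[r] += seq_lens[i]
    return buckets

# E.g., balance_across_ranks([512, 8, 480, 32, 256, 64], world_size=2)
# places the long 512- and 480-token samples on different ranks.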

6.2.3 Balancing across Micro Batches

However, with gradient accumulation,

  • it's not enough to balance valid tokens only at the whole-batch level,
  • since DP ranks synchronize once per micro batch.

To resolve this, verl supports

  • balancing the valid tokens across micro batches
  • by evenly dividing the data sequences in the batch before packing them into micro batches

To enable this, use use_dynamic_bsz.
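
A simplified sketch of the token-budget splitting idea (illustrative only; the function and parameter names are hypothetical):

def split_by_token_budget(seq_lens: list[int], max_tokens: int) -> list[list[int]]:
    # Greedily pack consecutive samples into micro batches so that each
    # micro batch stays under a fixed token budget.
    micro_batches, current, used = [], [], 0
    for i, n in enumerate(seq_lens):
        if current and used + n > max_tokens:
            micro_batches.append(current)
            current, used = [], 0
        current.append(i)
        used += n
    if current:
        micro_batches.append(current)
    return micro_batches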

6.3 Other Features

  1. Multi-Modal LLMs' RL
  2. Full support for RL on AMD hardware (ROCm kernels)
  3. Gradient Checkpointing (enable_gradient_checkpointing)
  4. Torch Compile (use_torch_compile)
  5. Liger Kernel (use_liger)

7 Programming Guide

7.1 Customizing the Dataset

A canonical RL dataset in verl has the following fields:

  • prompt: a list of messages {"role": "...", "content": "..."}
  • data_source: used to choose the reward function
  • reward_model: a dict containing
    • "ground_truth"
    • "style" like "model" or "rule"
  • (Optional) extra_info: a dict containing extra information

For VLM RL, verl expects the fields "images" and/or "videos".
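
For illustration, a single row could look like this (a hypothetical example; the data_source value is illustrative):

row = {
    "prompt": [
        {"role": "user", "content": "What is 17 * 24?"},
    ],
    "data_source": "openai/gsm8k",  # selects the matching reward function
    "reward_model": {
        "style": "rule",            # rule-based scoring
        "ground_truth": "408",
    },
    "extra_info": {"split": "train", "index": 0},  # optional
}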

For examples, please check the examples/data_preprocess directory.

You could also customize the field names via config. Please check the data section in config files like ppo_trainer.yaml for more details.

For further customization, verl provides the data.custom_cls config:

Listing 4: Config for custom dataset class.
data:
  custom_cls:
    path: null # path to the `.py` file containing the `class` definition
    name: null # the `class` name

An example CLI config could be:

Listing 5: Example config for custom dataset class.
--data.custom_cls.path=./examples/dataset/custom_dataset.py \
--data.custom_cls.name=CustomDataset

The custom dataset class defined in the .py file is required to accept the following initialization parameters:

Listing 6: Custom dataset class initialization.
from typing import List, Optional, Union

from omegaconf import DictConfig
from transformers import PreTrainedTokenizer, ProcessorMixin

class CustomDataset:  # You could also inherit from `RLHFDataset`
  def __init__(
      self,
      data_files: Union[str, List[str]],
      tokenizer: PreTrainedTokenizer,
      config: DictConfig,
      processor: Optional[ProcessorMixin] = None,
  ):
      ...

7.2 Customizing the Reward

verl allows defining a custom reward function via the custom_reward_function config:

Listing 7: Config for custom reward function.
custom_reward_function:
  path: null # path to the `.py` file containing the function definition
  name: compute_score # the function name after `def`
reward_model:
  reward_manager: naive

An example CLI config could be:

Listing 8: Example config for custom reward function.
--custom_reward_function.path=./examples/reward_fn/custom_reward_fn.py \
--custom_reward_function.name=compute_score \
--reward_model.reward_manager=naive

The function defined in the .py file should accept the parameters passed from the reward manager's __call__ method. Taking NaiveRewardManager as an example:

Listing 9: How a reward function is called in NaiveRewardManager.
class NaiveRewardManager:
    def __call__(self, data: DataProto, return_dict: bool=False):
        # Preprocessing for the input data
        score = self.compute_score(
            data_source=data_source,
            solution_str=solution_str,
            ground_truth=ground_truth,
            extra_info=extra_info,
        )
        # Other processing for the final `reward`
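
Matching that call, a minimal rule-based compute_score could look like this (a sketch; the "####" answer delimiter is a hypothetical convention):

def compute_score(data_source, solution_str, ground_truth, extra_info=None):
    # Extract the text after a "####" delimiter (hypothetical answer format)
    # and grant reward 1.0 on an exact match with the ground truth.
    answer = solution_str.split("####")[-1].strip()
    return 1.0 if answer == ground_truth else 0.0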

For more complex features, you can also add a new reward manager like PRIMERewardManager or DAPORewardManager.

7.3 Customizing the Loss Function

To modify the loss function, the most convenient way is to

  1. search for the .backward() call
  2. modify functions like compute_policy_loss
  3. or add loss terms like entropy_loss

For example, the DataParallelPPOActor.update_policy method defines the loss function as follows:

Listing 10: Simplified loss function definition in DataParallelPPOActor.
class DataParallelPPOActor(BasePPOActor):
    def update_policy(self, data: DataProto):
        pg_loss = compute_policy_loss(
            old_log_prob=old_log_prob, log_prob=log_prob,
            advantages=advantages, # ...
        )
        entropy_loss = agg_loss(loss_mat=entropy)
        policy_loss = pg_loss - entropy_loss * entropy_coeff
        kld = kl_penalty(
            logprob=log_prob, ref_logprob=ref_log_prob, # ...
        )
        kl_loss = agg_loss(loss_mat=kld)
        policy_loss = policy_loss + kl_loss * self.config.kl_loss_coef
        policy_loss.backward()
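
As an illustration, an extra loss term could be inserted just before the backward call (a hedged sketch; length_penalty_coeff, response_lengths, and max_response_length are hypothetical names):

# Hypothetical regularizer discouraging overly long responses.
length_penalty = (response_lengths.float() / max_response_length).mean()
policy_loss = policy_loss + length_penalty * length_penalty_coeff
policy_loss.backward()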

7.4 Customizing the Training Logic

As mentioned above, the main training logic is concentrated in the fit function of the trainer classes like RayPPOTrainer.

For example, the RayDAPOTrainer class overrides the fit function to implement the "dynamic sampling" feature:


Listing 11: Simplified fit function in RayDAPOTrainer, implementing dynamic sampling.
class RayDAPOTrainer(RayPPOTrainer):
  def fit(self):
    for epoch in range(self.config.trainer.total_epochs):
      batch = None
      num_prompt_in_batch, num_gen_batches = 0, 0
      for batch_dict in self.train_dataloader:
        new_batch = DataProto.from_single_dict(batch_dict)
        num_gen_batches += 1
        # Extracting `gen_batch` from `new_batch` ...
        gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch)
        new_batch = new_batch.union(gen_batch_output)
        if not self.config.algorithm.filter_groups.enable:
          batch = new_batch
        else:
          # Getting `kept_traj_idxs` and updating `num_prompt_in_batch` ...
          new_batch = new_batch[kept_traj_idxs]
          batch = new_batch if batch is None else DataProto.concat([batch, new_batch])
          prompt_bsz = self.config.data.train_batch_size
          if num_prompt_in_batch < prompt_bsz:
            max_num_gen_batches = self.config.algorithm.filter_groups.max_num_gen_batches
            if max_num_gen_batches <= 0 or num_gen_batches < max_num_gen_batches:
                continue
          else:
            traj_bsz = self.config.data.train_batch_size * self.config.actor_rollout_ref.rollout.n
            batch = batch[:traj_bsz]
        # ...

8 About

8.1 Presenter Contact

References

Barham, Paul, Aakanksha Chowdhery, Jeff Dean, Sanjay Ghemawat, Steven Hand, Daniel Hurt, Michael Isard, et al. 2022. “Pathways: Asynchronous Distributed Dataflow for ML.” Proceedings of Machine Learning and Systems 4 (April): 430–49. https://proceedings.mlsys.org/paper_files/paper/2022/hash/37385144cac01dff38247ab11c119e3c-Abstract.html.
Mahan, Dakota, Teknium, and Roger Jin. 2025. “Atropos: An Async-First Environment Rollout Controller.” https://www.github.com/NousResearch/Atropos.
DeepSeek-AI. 2025. “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.” https://arxiv.org/abs/2501.12948.
Gloeckle, Fabian, Badr Youbi Idrissi, Baptiste Roziere, David Lopez-Paz, and Gabriel Synnaeve. 2024. “Better & Faster Large Language Models via Multi-Token Prediction.” In Forty-First International Conference on Machine Learning. https://openreview.net/forum?id=pEWAcejiU2.
Kimi Team. 2025. “Kimi K1.5: Scaling Reinforcement Learning with LLMs.” https://arxiv.org/abs/2501.12599.
OpenAI. 2024. “Learning to Reason with LLMs.” OpenAI Blog. https://openai.com/index/learning-to-reason-with-llms/.
———. 2025. “Introducing Deep Research.” OpenAI Blog. https://openai.com/index/introducing-deep-research/.
Sheng, Guangming, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. 2024. “HybridFlow: A Flexible and Efficient RLHF Framework.” https://arxiv.org/abs/2409.19256.
Shi, Jiajun, Jian Yang, Jiaheng Liu, Xingyuan Bu, Jiangjie Chen, Junting Zhou, Kaijing Ma, et al. 2025. “KORGym: A Dynamic Game Platform for LLM Reasoning Evaluation.” https://arxiv.org/abs/2505.14552.